343 research outputs found

    Credimus

    We believe that economic design and computational complexity---while already important to each other---should become even more important to each other with each passing year. But for that to happen, experts in areas such as social choice, economics, and political science on the one hand, and in computational complexity on the other, will have to better understand each other's worldviews. This article, written by two complexity theorists who also work in computational social choice theory, focuses on one direction of that process by presenting a brief overview of how most computational complexity theorists view the world. Although our immediate motivation is to make the lens through which complexity theorists see the world better understood by those in the social sciences, we also feel that even within computer science it is very important for nontheoreticians to understand how theoreticians think, just as it is equally important for theoreticians to understand how nontheoreticians think.

    Pair algebra and its application to automata theory


    Completeness Results for Parameterized Space Classes

    The parameterized complexity of a problem is considered "settled" once it has been shown to lie in FPT or to be complete for a class in the W-hierarchy or a similar parameterized hierarchy. Several natural parameterized problems have, however, resisted such a classification. At least in some cases, the reason is that upper and lower bounds on their parameterized space complexity have recently been obtained that rule out completeness results for parameterized time classes. In this paper, we make progress in this direction by proving that the associative generability problem and the longest common subsequence problem are complete for parameterized space classes. These classes are defined in terms of different forms of bounded nondeterminism and in terms of simultaneous time--space bounds. As a technical tool we introduce a "union operation" that translates between problems complete for classical complexity classes and for W-classes. Comment: IPEC 201
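    To make the second of these problems concrete, here is a small sketch (in Python, our own illustration, not taken from the paper) of the longest common subsequence problem for k strings; the natural parameter is k, and the exponent of the straightforward dynamic-programming running time grows with k, which is one reason the problem resists an FPT or W-class classification.

        from functools import lru_cache

        def lcs_length(strings):
            """Length of a longest common subsequence of the given k strings."""
            @lru_cache(maxsize=None)
            def rec(positions):
                # positions[i] marks how much of strings[i] has been consumed so far
                best = 0
                for ch in set(strings[0][positions[0]:]):
                    nxt = []
                    for s, p in zip(strings, positions):
                        i = s.find(ch, p)
                        if i == -1:
                            break
                        nxt.append(i + 1)
                    else:
                        # ch occurs in every remaining suffix: extend the LCS by it
                        best = max(best, 1 + rec(tuple(nxt)))
                return best
            return rec(tuple([0] * len(strings)))

        print(lcs_length(("abcbdab", "bdcaba", "badacb")))  # prints 4 ("bdab")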

    Computing with and without arbitrary large numbers

    In the study of random access machines (RAMs) it has been shown that the availability of an extra input integer, having no special properties other than being sufficiently large, is enough to reduce the computational complexity of some problems. So far, however, this has only been shown for specific problems. We provide a characterization of the power of such extra inputs for general problems. To do so, we first correct a classical result by Simon and Szegedy (1992) as well as one by Simon (1981). In the former we point out mistakes in the proof and correct them by an entirely new construction, with no great change to the results. In the latter, the original proof direction stands with only minor modifications, but the new results are far stronger than those of Simon (1981). In both cases, the new constructions provide the theoretical tools required to characterize the power of arbitrary large numbers. Comment: 12 pages (main text) + 30 pages (appendices), 1 figure. Extended abstract. The full paper was presented at TAMC 2013. (Reference given is for the paper version, as it appears in the proceedings.)
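    As a minimal illustration (our own, not a construction from the paper) of why a single sufficiently large number can lower per-operation cost on a unit-cost RAM: one huge precomputed integer can serve as a bit-indexed lookup table, so that membership in an arbitrary fixed set of small integers costs only a shift and a mask, each a single RAM operation regardless of how large the table integer is.

        def build_table(members):
            """Pack a set of small non-negative integers into one (possibly enormous) integer."""
            table = 0
            for x in members:
                table |= 1 << x          # set bit x of the table
            return table

        def contains(table, x):
            """A constant number of unit-cost RAM operations, however large the table is."""
            return (table >> x) & 1 == 1

        S = {3, 7, 42, 1000}
        M = build_table(S)
        print(contains(M, 42), contains(M, 43))   # True False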

    Unification and Logarithmic Space

    We present an algebraic characterization of the complexity classes Logspace and NLogspace, using an algebra with a composition law based on unification. This new bridge between unification and complexity classes is inspired by proof theory, more specifically by linear logic and the Geometry of Interaction. We show how unification can be used to build a model of computation by means of specific subalgebras associated with finite permutation groups. We then prove that whether an observation (the algebraic counterpart of a program) accepts a word can be decided within logarithmic space. We also show that the construction can naturally represent pointer machines, an intuitive way of understanding logarithmic-space computing.
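    For readers who have not met unification before, here is a short, standard first-order unification routine (a Robinson-style sketch of our own, not the algebra of the paper; the occurs check is omitted for brevity). Two terms unify exactly when some substitution for their variables makes them syntactically equal, and it is this operation that the paper turns into a composition law.

        def is_var(t):
            # variables are capitalized strings; compound terms are tuples (f, arg1, ..., argk)
            return isinstance(t, str) and t[:1].isupper()

        def walk(t, subst):
            # follow variable bindings until a non-variable or an unbound variable is reached
            while is_var(t) and t in subst:
                t = subst[t]
            return t

        def unify(t1, t2, subst=None):
            """Return a substitution (dict) unifying t1 and t2, or None if none exists."""
            if subst is None:
                subst = {}
            t1, t2 = walk(t1, subst), walk(t2, subst)
            if t1 == t2:
                return subst
            if is_var(t1):
                return {**subst, t1: t2}
            if is_var(t2):
                return {**subst, t2: t1}
            if isinstance(t1, tuple) and isinstance(t2, tuple) \
                    and len(t1) == len(t2) and t1[0] == t2[0]:
                for a, b in zip(t1[1:], t2[1:]):
                    subst = unify(a, b, subst)
                    if subst is None:
                        return None
                return subst
            return None

        # f(X, g(b)) unifies with f(a, g(Y)) under the substitution {X: a, Y: b}
        print(unify(('f', 'X', ('g', 'b')), ('f', 'a', ('g', 'Y'))))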

    Characterizing the Existence of Optimal Proof Systems and Complete Sets for Promise Classes.

    In this paper we investigate the following two questions: Q1: Do there exist optimal proof systems for a given language L? Q2: Do there exist complete problems for a given promise class C? For concrete languages L (such as TAUT or SAT) and concrete promise classes C (such as NP∩coNP, UP, BPP, disjoint NP-pairs, etc.), these questions have been intensively studied in recent years, and a number of characterizations have been obtained. Here we provide new characterizations for Q1 and Q2 that apply to almost all promise classes C and languages L, thus creating a unifying framework for the study of these practically relevant questions. While questions Q1 and Q2 are left open by our results, we show that they receive affirmative answers when a small amount of advice is available in the underlying machine model. This continues a recent line of research on proof systems with advice started by Cook and Krajíček [6].

    On Measuring Non-Recursive Trade-Offs

    We investigate the phenomenon of non-recursive trade-offs between descriptional systems in an abstract fashion. We aim to categorize non-recursive trade-offs by bounds on their growth rate, and show how to deduce such bounds in general. We also identify criteria which, in the spirit of abstract language theory, allow us to deduce non-recursive trade-offs from effective closure properties of language families on the one hand, and from differences in the decidability status of basic decision problems on the other. We develop a qualitative classification of non-recursive trade-offs in order to obtain a better understanding of this very fundamental behaviour of descriptional systems.

    Exact Cover with light

    We suggest a new optical solution for the YES/NO version of the Exact Cover problem that exploits the massive parallelism of light. The idea is to build an optical device which can generate all possible solutions of the problem and then pick the correct one. In our case the device has a graph-like representation, and the light traverses it by following the routes given by the connections between nodes. The nodes are connected by arcs in a special way which lets us generate all possible covers (exact or not) of the given set. To select the correct solution we assign to each item of the set to be covered a special integer; these numbers represent the delays imposed on the light as it passes through the corresponding arcs. The solution is represented by a subray arriving at a particular moment at the destination node, which tells us whether an exact cover exists. Comment: 20 pages, 4 figures, New Generation Computing, accepted, 200
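    A software analogue of the delay encoding (a sketch under our own assumptions, not the optical construction itself): if the instance has k subsets, assigning element i the delay (k+1)**i guarantees that no carries occur when delays add up, so a sub-collection of subsets is an exact cover precisely when its total delay equals the target moment, namely the sum of all element delays.

        from itertools import combinations

        def exact_cover_exists(universe, subsets):
            k = len(subsets)
            delay = {e: (k + 1) ** i for i, e in enumerate(sorted(universe))}
            target = sum(delay.values())                     # arrival time of an exact cover
            subset_delay = [sum(delay[e] for e in s) for s in subsets]
            # every sub-collection corresponds to one "subray"; its arrival time is
            # the sum of the delays on the arcs it traversed
            for r in range(k + 1):
                for combo in combinations(range(k), r):
                    if sum(subset_delay[i] for i in combo) == target:
                        return True
            return False

        U = {1, 2, 3, 4}
        S = [{1, 2}, {3, 4}, {2, 3}, {4}]
        print(exact_cover_exists(U, S))    # True: {1, 2} and {3, 4} cover U exactly once each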

    Small Universal Accepting Networks of Evolutionary Processors with Filtered Connections

    In this paper, we present some results regarding the size complexity of Accepting Networks of Evolutionary Processors with Filtered Connections (ANEPFCs). We show that there are universal ANEPFCs of size 10, by devising a method for simulating 2-tag systems. This result significantly improves the previously known upper bound of 18 on the size of universal ANEPFCs. We also propose a new, computationally and descriptionally efficient simulation of nondeterministic Turing machines by ANEPFCs. More precisely, we describe (informally, due to space limitations) how ANEPFCs with 16 nodes can simulate, in O(f(n)) time, any nondeterministic Turing machine of time complexity f(n). Thus the known upper bound on the number of nodes in a network simulating an arbitrary Turing machine is decreased from 26 to 16.
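    Since the universality result rests on simulating 2-tag systems, here is a brief sketch (with hypothetical rules of our own, not taken from the paper) of how such a system computes: at each step it reads the first symbol of the current word, deletes the first two symbols, and appends the production associated with the symbol it read, halting when the designated halting symbol reaches the front.

        def run_2tag(word, productions, halt_symbol, max_steps=1000):
            """Run a 2-tag system for at most max_steps steps and return the final word."""
            word = list(word)
            for _ in range(max_steps):
                if len(word) < 2 or word[0] == halt_symbol:
                    break
                head = word[0]
                word = word[2:] + list(productions[head])   # delete two symbols, append production
            return ''.join(word)

        # toy production rules, chosen only for illustration
        rules = {'a': 'bc', 'b': 'a', 'c': 'aH'}
        print(run_2tag('aaa', rules, 'H'))   # runs until 'H' appears at the front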

    Solving the subset-sum problem with a light-based device

    We propose a special computational device which uses light rays to solve the subset-sum problem. The device has a graph-like representation, and the light traverses it by following the routes given by the connections between nodes. The nodes are connected by arcs in a special way which lets us generate all possible subsets of the given set. To each arc we assign either a number from the given set or a predefined constant. When the light passes through an arc it is delayed by the amount of time indicated by the number placed on that arc. At the destination node we check whether there is a ray whose total delay equals the target value of the subset-sum problem (plus some constants). Comment: 14 pages, 6 figures, Natural Computing, 200
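    The following sketch (our own simplification, not the optical device) mimics the delay check in software, assuming that every ray picks up the same total constant offset, so the predefined constants can be ignored: each "subray" corresponds to a subset, its arrival time is the sum of the numbers on the arcs it traversed, and a solution exists exactly when some ray arrives at the target moment.

        def subset_sum_exists(numbers, target):
            arrival_times = {0}                    # one ray, no delay accumulated yet
            for a in numbers:
                # at each node the ray splits: one branch takes the arc carrying the
                # delay a, the other takes the arc that skips it
                arrival_times |= {t + a for t in arrival_times}
            return target in arrival_times

        print(subset_sum_exists([3, 9, 8, 4], 12))   # True: 3 + 9 (or 8 + 4)
        print(subset_sum_exists([3, 9, 8, 4], 6))    # False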